What We'll Cover
This week we turn from how AI changes the way you find and evaluate research (Week 5) to how it changes the way you write and communicate that research. We begin with what might be the most important idea of the entire week: writing is not just output. It is a cognitive process — a form of thinking that cannot be separated from the ideas it produces.
When you struggle to put an argument into words, that struggle is not wasted effort. It is the mechanism by which you discover what you actually think, where your reasoning breaks down, and what you still need to learn. This is not a romantic notion about the craft of writing. It is a claim backed by cognitive science, and it has profound implications for how you use AI writing tools.
In this session, we will examine the cognitive science behind writing-as-thinking, look at emerging research on what happens when AI disrupts this process, map out a spectrum of AI assistance from low-risk to high-risk, and connect all of this back to the virtue ethics framework you developed in Week 4. The goal is not to scare you away from AI writing tools — it is to help you use them in ways that make you a better thinker, not a better delegator.
🧠 Writing IS Thinking
The relationship between writing and thinking is not merely correlational — it is constitutive. You do not first think clearly and then write down what you have thought. The act of writing itself forces you to clarify vague intuitions, test the logical connections between ideas, and confront gaps in your understanding that were invisible while the ideas remained in your head.
⚙️ The Cognitive Science of Writing-as-Thinking
Cognitive scientists have long understood that writing is not a transcription activity but a knowledge-transforming activity. When you write, you are forced to linearise thoughts that exist as messy, interconnected webs in your mind. This linearisation process — choosing what comes first, what follows, what supports what — is itself a form of reasoning. You cannot organise an argument on paper without first organising it in your mind, and the attempt to organise it on paper frequently reveals that your mental organisation was less coherent than you believed.
Writing also imposes a form of self-explanation. When you write for a reader (even an imagined one), you must make explicit the assumptions, definitions, and logical steps that you take for granted in internal thought. This externalisation is where many of the deepest insights occur. A researcher who says "I knew this, but I only really understood it when I had to write it down" is describing a genuine cognitive phenomenon, not a figure of speech.
The implication is significant: if you skip the writing, you skip the thinking. A perfectly polished paragraph produced by AI may communicate an idea effectively to a reader, but it has not put you through the cognitive process that would have deepened your own understanding of that idea. The product looks the same; the intellectual development is entirely absent.
Recent research has begun to quantify what happens when AI disrupts this cognitive process. The findings are sobering.
🧠 Reduced Cognitive Engagement
A 2025 article in the Harvard Gazette reported on research exploring whether AI use is associated with reduced cognitive effort. The concern is that when AI handles the heavy lifting of formulating ideas into prose, users engage in less deep processing — the kind of effortful thinking that leads to genuine understanding and long-term retention. The question the researchers raise is direct: is AI making us intellectually lazier?
📈 Brain Activity and ChatGPT
Researchers at the MIT Media Lab conducted a study ("Your Brain on ChatGPT") measuring neural activity in participants while they used ChatGPT for writing tasks. ChatGPT users showed the lowest levels of brain engagement of the groups studied, with reduced alpha and beta connectivity indicating under-engagement. The sample size was relatively small (54 participants), and the findings should be interpreted with appropriate caution. But they align with the broader theoretical concern: offloading the cognitive work of writing to AI may reduce the depth of thinking that writers engage in.
👫 The Cognitive Dissonance
Beyond the question of reduced thinking, researchers are documenting a specific psychological tension that emerges when students and academics use AI for writing — a form of cognitive dissonance that goes to the heart of scholarly identity.
📰 The Dissonance Between "Sounds Better" and "Sounds Like Me"
A 2025 study published in Frontiers in AI offered a conceptual exploration of what the authors call "generative AI-induced cognitive dissonance" in university-level academic writing. The researchers identified a specific tension: students recognise that AI-generated text often sounds more polished and professional than their own writing, yet they also feel that this text does not represent their thinking, their voice, or their intellectual development.
This is not a trivial discomfort. For researchers, your writing voice is inseparable from your intellectual identity. The way you frame a problem, the analogies you reach for, the hedging language you use — these reflect how you think, not just what you think. When AI produces text that is technically superior but intellectually hollow (from your perspective), you are faced with an uncomfortable choice: submit something that sounds better but is not yours, or submit something that sounds worse but represents genuine intellectual engagement.
The study argues that this dissonance, if unresolved, can lead researchers to gradually cede more writing to AI — not because they believe it produces better scholarship, but because the polished output creates social pressure to match that standard, even at the cost of genuine engagement with ideas.
The First Draft Trap
One of the most common uses of AI in academic writing is generating a first draft. On the surface, this seems efficient: let AI produce a rough version, then edit it into shape. But there is a fundamental problem with this approach that goes beyond questions of originality.
- Writing a first draft is generative; editing is reactive. When you write your own first draft, you are generating ideas, testing structures, making choices about what to include and exclude. When you edit AI-generated text, you are reacting to someone else's choices — accepting, rejecting, or modifying a framework that is not yours.
- AI's draft shapes your thinking, not the reverse. Once you have read an AI-generated draft, you cannot un-read it. The structure it chose, the arguments it emphasised, the examples it selected — these become the frame through which you view the topic. Your "editing" is now constrained by the AI's initial choices, even when you disagree with them.
- The hardest and most valuable part of writing is the beginning. Deciding what to say, how to organise it, what the central argument is — these are the decisions that reflect the deepest thinking. If AI makes these decisions for you, the subsequent editing, no matter how thorough, cannot recover the intellectual work that was skipped.
- Editing AI prose trains a different skill than writing. Becoming an excellent editor of AI-generated text is a useful ability, but it is not the same as becoming an excellent thinker and writer. The two skills develop different cognitive capacities, and a researcher who can only edit — but not generate from scratch — is missing a fundamental intellectual capability.
The Dependency Risk
A 2025 synthesis published in Frontiers in Education reviewed recent evidence (2023–2025) on the impact of generative AI on academic reading and writing. Among the patterns identified was a concerning trend toward student dependency on AI outputs, coupled with surface-level engagement with the material. Students who regularly used AI for writing tasks showed reduced willingness to engage in the kind of deep, sustained, often uncomfortable cognitive work that characterises genuine academic inquiry.
📐 A Spectrum of AI Assistance
Not all uses of AI in writing are equal. There is a meaningful difference between using AI to fix your typos and using AI to generate your arguments. The following spectrum maps out different levels of AI assistance, from genuinely helpful to genuinely harmful to your development as a researcher.
✅ Proofreading and Grammar
Low Risk
Fixing typos, correcting grammar, adjusting punctuation. You have done all the thinking — the argument is yours, the structure is yours, the voice is yours. AI polishes the surface without touching the substance. This is the equivalent of a spell-checker, and there is no meaningful cognitive cost. Your ideas remain entirely your own; AI simply ensures they are presented without distracting mechanical errors.
📝 Language Enhancement
Low–Medium Risk
Improving clarity, smoothing flow, refining word choice. The ideas and structure remain yours, but AI helps you communicate them more effectively. This is particularly valuable for non-native English speakers (which we will explore in Sub-Lesson 3). The risk is modest: you might lose some of your distinctive voice, and you might not develop your own instincts for clear prose. But the core intellectual work — the thinking — is still yours.
🔨 Restructuring
Medium Risk
AI suggests a better organisation for your arguments. The content is still yours, but AI is now shaping how your arguments are presented — what comes first, how sections relate, where emphasis falls. Structure is not neutral: the order in which you present ideas affects how a reader (and you) understand the relationships between them. When AI restructures your writing, it is making argumentative choices on your behalf, even if you approve them afterward.
💥 Generating Arguments
High Risk
AI produces the reasoning — the claims, the evidence selection, the logical connections. This is where the line between assistance and substitution becomes dangerously blurry. If AI generates an argument, whose argument is it? Can you defend it under questioning? Do you understand why this particular evidence supports this particular claim, or are you trusting that the AI got it right? If you cannot articulate the reasoning without looking at what the AI wrote, the argument is not yours.
🚫 Generating Entire Sections
Very High Risk
AI writes substantial portions of your work — paragraphs, sections, or entire chapters. This is not assistance. This is substitution. The AI has done the thinking, made the structural choices, selected the evidence, and crafted the prose. Your role has been reduced from author to editor-of-someone-else's-work, and the "someone else" is a statistical model that does not understand the content it has produced. You are submitting work that you did not intellectually engage with in any meaningful way.
👍 When AI Writing Assistance Genuinely Helps
The point of this session is not to argue that AI should never be used for writing. There are genuine cases where AI writing assistance can help researchers produce better work without sacrificing the cognitive benefits of the writing process. The key is to use AI in ways that support your thinking rather than replace it.
💬 Overcoming Writer's Block
Sometimes the hardest part of writing is starting. When you are staring at a blank page, paralysed by the gap between the complexity in your head and the empty document in front of you, AI can serve as a conversation partner. Not to write for you, but to help you think out loud. Explain your ideas to Claude as if to a colleague. Ask it to push back, to ask clarifying questions, to identify where your reasoning is unclear. Use AI as a thinking tool — a way to externalise your thoughts — and then write the actual text yourself.
🌐 Language Polishing for Non-Native Speakers
If English is not your first language (we will explore this in depth in Sub-Lesson 3), AI can help you express ideas that are already fully formed in your mind but difficult to articulate in academic English. This is one of the strongest legitimate use cases: you have done the thinking, you know what you want to say, and AI helps you say it in a language that is not your own. The intellectual work remains entirely yours; the linguistic barrier is lowered.
📝 Summarising Your Own Work
You have written a 10,000-word thesis chapter and need a 200-word abstract. You have produced a technical paper and need a plain-language summary for a public audience. These are translation tasks — expressing the same ideas at different levels of detail or for different audiences. AI can be genuinely useful here because the ideas are already yours; you are not generating new thinking, you are reformatting existing thinking.
🔍 Getting Feedback on Clarity
After you have written a draft yourself, asking AI to identify passages that are unclear, arguments that seem unsupported, or transitions that feel abrupt can be valuable feedback. This is analogous to asking a colleague to read your draft — the thinking and writing are yours, and AI provides a reader's perspective on how effectively you have communicated. The critical difference: you wrote the draft first. The thinking happened. AI responds to your work, rather than generating it.
📚 Connection to Virtue Ethics
In Week 4, we explored multiple ethical frameworks for thinking about AI in research. One of the most relevant to this week's topic is virtue ethics — the framework that asks not "what should I do?" but "what kind of person am I becoming?" Applied to AI writing assistance, that question surfaces several considerations.
- Efficiency versus avoidance. There is a meaningful difference between using AI to be more efficient (doing the same cognitive work in less time) and using AI to avoid cognitive work entirely. Proofreading is efficiency. Generating your arguments is avoidance. The virtue ethics lens helps you see this distinction clearly: efficiency preserves your development as a thinker; avoidance undermines it.
- The compound effect. One AI-generated paragraph seems harmless. But over the course of a degree — hundreds of paragraphs, dozens of assignments, years of practice — the cumulative effect of skipping the cognitive work of writing is significant. A researcher who has spent three years editing AI output has developed fundamentally different capacities than one who has spent three years wrestling with their own prose. The difference may not be visible in any single submission, but it will be visible in the researcher they each become.
- Intellectual honesty. Virtue ethics also asks about honesty — not just whether you have violated a rule, but whether you are being honest with yourself about what you are doing and why. Using AI to generate content and then telling yourself you are "just using it as a starting point" is a form of self-deception if the truth is that AI did the thinking and you did the polishing. Being honest about what role AI played is a virtue that protects your own development.
- The ubuntu dimension. Recall from Week 4 that ubuntu ethics emphasises relational accountability. Your intellectual development does not only affect you. When you present AI-generated thinking as your own in a seminar, a peer review, or a collaboration, you are affecting the intellectual community around you. Your colleagues are engaging with ideas they believe represent your genuine understanding. If those ideas are AI-generated, the relational trust that scholarship depends on is undermined.
📚 Readings
Core Readings
📰 Harvard Gazette (2025): "Is AI dulling our minds?"
Explores research on the cognitive effects of AI use, including concerns about reduced deep thinking and intellectual engagement when AI handles cognitively demanding tasks. A broad, accessible introduction to the central concern of this session.
📰 Frontiers in AI (2025): "A conceptual exploration of generative AI-induced cognitive dissonance and its emergence in university-level academic writing"
A peer-reviewed study examining the psychological tension students experience when AI-generated text sounds more polished than their own writing but does not represent their thinking. Introduces the concept of cognitive dissonance in the AI writing context and explores its implications for academic identity and intellectual development.
📰 Frontiers in Education (2025): "The impact of generative AI on academic reading and writing: a synthesis of recent evidence (2023–2025)"
A synthesis of recent research evidence on how generative AI is changing academic reading and writing practices. Identifies patterns including student dependency on AI outputs, surface-level engagement, and the tension between productivity gains and learning losses. Essential reading for understanding the landscape of current evidence.
Supplementary Readings
📖 Psychology Today (2025): "How AI Impacts Academic Thinking, Writing and Learning"
An accessible overview of the intersection between AI tools and cognitive processes in academic settings. Discusses how reliance on AI for writing tasks may affect the development of critical thinking and independent reasoning skills.
📖 Holmner et al. (2025): "The Future of Academic Writing in the Age of Generative AI"
Published in the Proceedings of the Association for Information Science and Technology (ASIS&T), this paper examines how generative AI is reshaping the landscape of academic writing and what this means for the future of scholarly communication.
Key Takeaways
- Writing is a cognitive process, not just a communication tool. The act of writing forces you to clarify, structure, and test your ideas in ways that thinking alone does not. When you skip the writing, you skip a critical form of intellectual development.
- AI creates a real cognitive dissonance for academic writers. Students and researchers feel torn between text that sounds polished (AI) and text that represents their actual thinking (their own). This tension, if unresolved, can push writers toward dependency rather than development.
- The "first draft trap" is real. If AI writes your first draft, you edit rather than think. Editing someone else's prose — even AI prose — is fundamentally different from generating your own. The hardest and most valuable cognitive work happens at the beginning, not during the polish.
- AI writing assistance exists on a spectrum. Proofreading is low risk. Language enhancement is modest risk. Restructuring starts to encroach on your thinking. Generating arguments is high risk. Generating entire sections is substitution, not assistance.
- Even legitimate uses require you to remain the thinker. Overcoming writer's block, polishing non-native language, summarising your own work, and getting feedback on clarity are all valid uses — but only if you have done the intellectual work first.
- Virtue ethics asks: what kind of researcher are you becoming? The cumulative effect of offloading cognitive work to AI shapes not just your output but your intellectual capacities. Efficiency is not the same as avoidance. The researcher you become in three years depends on the cognitive work you do (or do not do) today.